Hello, World!
For those of you who enforce my Sundays on me (keep doing that, thank you!), I'll be swapping my Saturdays with my Sundays.
That's right! In this brave new world, I'll be taking Saturdays off, not Sundays. Feel free to pester me all day on Sunday now!
This means, as a logical result, I will not be around tomorrow, Saturday.
Much love.
More hardware adventures.
I got my Dell XPS 13. Amazing.
The good news: this MacBook Air clone is clearly an Air competitor, and slightly better in nearly every regard except the battery.
The bad news: the Intel wireless card needs non-free firmware (I'll be replacing that card shortly), and the touchpad's driver isn't totally implemented until kernel 3.16. I'm currently building a 3.14 kernel with the patch to send to the kind Debian kernel people. We'll see if that works. Ubuntu Trusty already has the patch, but it didn't get upstreamed. That kinda sucks.
It also shipped with UEFI disabled, defaulting to legacy boot. It shipped with Ubuntu; I was a bit disappointed not to see Ubuntu keys on the machine.
The touchscreen works; in short, stunning. I think I've found my new travel buddy. Debian unstable runs great; stable had some issues.
A few interesting things happened after I got a MacBook Air.
Firstly, I got a lot of shit from my peers and friends about it. This was funny to me; nothing about it really bothered me, but I can see it becoming really tiresome at events like hackathons or conferences.
As a byproduct, there's a strong feeling in the hardcore F/OSS world that Apple hardware is the incarnation of evil.
As a result of both of the above, hardcore F/OSS folks (and distro hackers) don't buy Apple hardware.
Therefore, GNU/Linux is complete garbage on Apple hardware. Apple's firmware bugs don't help, but we're BAD.
Some might ask why this is a big deal. The fact is, this is one of the most used platforms for Open Source development (note I used that term exactly).
Are we to damn these users to a nonfree OS because we want to maintain our purity?
I had to give back my Air, but I still have a Mac Mini that I've been using for testing OSX bugs in code I have. Very soon, my Mac Mini will be used to help fix the common bugs in the install process.
Some things you can do:
Consider not giving off an attitude to people with Apple hardware. Be welcoming.
Consider helping with supporting your favorite distro on Apple hardware. Props to Fedora for doing such a great job; in particular, mjg59 and Peter Jones for all they do with it.
One which ends in tears, I'm afraid.
A week or so ago, I got a MacBook Air 13" (MacBookAir6,2) to take with me when I head out to local hack sessions, and when I travel out of state for short lengths of time. My current ThinkPad T520i is an amazing machine, and will remain my daily driver for a while to come.
After getting it, I unboxed the machine, and put in a Debian install USB key. I wiped OSX (without even booting into it) and put Debian on the drive.
To my dismay, it didn't boot up after install. Using the recovery mode of the disk, I chrooted in and attempted an upgrade (to ensure I had an up-to-date GRUB). I couldn't dist-upgrade; the terminal went white. After C-c'ing the terminal, I saw something about systemd's sysvinit shim, so I manually purged sysvinit and installed systemd.
I hear this has been resolved now. Thanks :)
My machine still wasn t booting, so I checked around. Turns out I needed to copy over the GRUB EFI blob to a different location to allow it to boot. After doing that, I could boot Debian whilst holding Alt.
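Concretely, the blob-copying amounts to something like this (a sketch with stock Debian paths; EFI/BOOT/BOOTX64.EFI is the well-known fallback location Mac firmware searches, but check your own ESP layout):

```console
# mount /dev/sda1 /boot/efi
# mkdir -p /boot/efi/EFI/BOOT
# cp /boot/efi/EFI/debian/grubx64.efi /boot/efi/EFI/BOOT/BOOTX64.EFI
```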
After booting, things went great! Until I shut my lid. After that point, my screen was either 0% bright (absolutely dark) or 100% (facemeltingly bright).
I saw mba6x_bl, which claims to fix it, and has many happy users. If you're in a similar situation, I highly suggest looking at it.
Unfortunately, this caused my machine to black out on boot and I couldn t see the screen at all anymore. A rescue disk later, and I was back.
Annoyingly, the boot chime's volume is stored in NVRAM, which gets written out by OSX when it shuts down. After consulting the inimitable Matthew Garrett, I was told the NVRAM can be hacked about with by mounting efivarfs (type efivarfs) on /sys/firmware/efi/efivars. Win!
After hitting enter, I got a kernel panic and some notes about hardware assertions failing.
That's when I returned my MacBook and got a Dell XPS 13.
This is a problem, folks.
If I can't figure out how to make this work, how can we expect our users to?
Such a shame that some of the most popular hardware on the planet gets no support, locking its users into even less freedom.
I just did some Debian package review in a somewhat unusual way, and I
wanted to share that. I'm hoping other Debian developers (and other free
software contributors) that need to review others' contributions can
learn something from this, and that I can use this blog post as a way to
find out if other people are doing something similar.
It was pretty exciting! At the end of it, I joined #debian-mentors to talk
about my cool process. Someone summarized it very accurately:
<sney> it almost sounds like you're working to replace yourself with automation
Context about alpine in Debian
(Skip to "Package review, with automation" if you're familiar with Debian.)
I'm the maintainer of alpine in Debian. There are quite a few problems
with the alpine package in Debian right now, the biggest of which are:
We're one version behind -- 2.11 is the latest available, but 2.10 is the newest that we have in Debian.
The packaging uses a decreasingly-popular packaging helper, cdbs, about which I happen to know less than the dh-style helper (aka dh7).
There are lots of bugs filed, and I don't respond in a timely fashion.
This doesn't change my deep love for alpine -- I've had that for about
half my life now, and so far, I don't see it going away.
A month or so ago, I got a friendly private message from Unit193, saying
he had converted the package to the dh style, and also packaged the
newer version. They wanted to know if they should clean this up into
something high-enough quality to land in Debian.
(In Debian, we have a common situation where enthusiastic users update
or create a new package, and aren't yet Debian Developers, so they don't
have permission to upload that directly to the "Debian archive", which
is the Debian equivalent of git master. Package "sponsorship" is how we
handle that -- a Debian Developer reviews the package, makes sure it is
of high quality, and uploads it to the Debian archive along with the
Debian Developer's OpenPGP signature, so the archive processing tools
know to trust it.)
On Friday evening, I had a spare moment, so I sent a private message to
Unit193 apologizing for not getting back to them in a reasonable amount
of time. Having another person help maintain is a pretty exciting
prospect, and I wanted to treat that enthusiasm with the respect it
deserves, or at least apologize when I haven't. I was surprised to see a
reply within a few minutes. At that point, I thought: I wasn't
planning on doing any package review this weekend, but if they're
online and I'm online... might as well!
Package review, with automation
Unit193 and I popped into ##alpine on irc.freenode.net, and I started
reading through their packaging changes, asking questions. As I asked
questions, I wondered -- how will I know if they are going to fix the
issues I'm raising?
Luckily, Unit193 wanted to use git to track the packaging, and we
settled on using git-buildpackage, a tool that was fairly new to both of
us. I thought, I might as well have some executable documentation so I
don't forget how to use it. ("Executable documentation" is Asheesh-speak
for a shell script.)
One thing I knew was that I'd have to test the package in a pbuilder, or
other pristine build environment. But all I had on me at the moment was
my work laptop, which didn't have one set up. Then I had a bright idea:
I could use Travis-CI, a public continuous
integration service, to check Unit193's packaging. If I had any
concerns, I could add them to the shell script and then point at the
build log and say, "This needs to be fixed." Then there would be great
clarity about the problems.
Some wonderful things about Travis-CI:
They give you root access on an Ubuntu Precise (12.04) virtual machine.
Their build hosts are well-connected to the Internet, which means fast downloads in e.g. pbuilder.
They will let you run a build for up to 50 minutes, for free.
Build just means "command" or "set of commands," so you can just write a shell script and they will run it.
Travis-CI will watch a github.com repository, if you like. This means you can 'git commit --allow-empty' then 'git push' and ask it to re-run your script.
Since Unit193's packaging was in git (but not on github), I created a
git repo containing the same contents, where I would experiment with
fixes for packaging problems I found. It'd be up to Unit193 to fix the
problems in the Alioth packaging. This way, I would be providing advice,
and Unit193 would have an opportunity to ask questions, so it would be more
like mentorship and less like me fixing things.
We did a few rounds of feedback this way, and got the packaging to higher
and higher quality. Every time Unit193 made a fix and pushed it, I would
re-run the auto-build, and see if the problems I spotted had gone away.
While the auto-build runs, I can focus on conversing with my mentee
about problems or just generally chatting. Chatting is valuable
community-building! It's extremely nice that I can do that while waiting
on the build, knowing that I don't have to read it carefully -- I can
just wait a few minutes, then see if it's done, and see if it's red or
green. Having the mentee around while I'm reviewing it means that I can
use the time I'm waiting on builds as fun free software social time.
(Contrast this with asynchronous review, where, all alone, I would wait
for a build to finish, then write up an email at the end of it all.)
This kind of mentorship + chatting was spread out over Friday night,
Saturday night, and Sunday morning. By the end of it, we had a superb
package that I'm excited to sign and push into Debian when I'm next near
my OpenPGP key.
Implementation details
You can see the final shell script here:
Alternates between the Alioth packaging vs. my fork of it. (This way, I can test packaging changes/suggestions.)
Disables ccache in pbuilder, due to a permissions problem with ccache/pbuilder/travis-ci, and I didn't need ccache anyway.
Handles 'git dch' slightly wrong. I need to figure that out.
Optionally passes --git-ignore-new to git-buildpackage, which was required initially, but should not be required by the time the package is ready. (This is an example of a thing I might forget to remark upon to my mentee.)
Plays games with git branches so that git-buildpackage on Travis-CI can find the pristine-tar branch.
Tries to use cdn.debian.net as its mirror, but on Saturday ran into problems with whichever mirror that is, so it falls back to mirror.mit.edu in case that fails.
Contains a GPG homedir, and imports the Debian archive key, so that it can get past Ubuntu-Debian pbuilder trust issues.
I also had a local shell script that would run, effectively:
git commit --allow-empty -m 'Trigger build'
git push
This was needed since I was basically using Travis-CI as a remote shell
service -- moreover, the scripts Travis-CI runs are in a different repo
(travis-debcheck)
than the software I'm actually testing (collab-maint/alpine.git).
Unit193 and I had a technical disagreement at one point, and I realized
that rather than discuss it, I could just ask Travis-CI to test which
one of us was right. At one point in the revisions, the binary package
build failed to build on Precise Pangolin (the Ubuntu release that the
Travis-CI worker is running), and Unit193 said that it was probably due
to a problem with building on Ubuntu. So I added a configuration option
to build just the source package in Ubuntu, keeping the binary package test-build within the Debian sid pbuilder, although I believed
that there was actually a problem with the packaging. This way, I could
modify the script so that I could demonstrate the problem could be
reproduced in a sid pbuilder. Of course, by the time I got that far,
Unit193 had figured out that it was indeed a packaging bug.
I also created an option to SKIP_PBUILDER; initially, I wanted to get
quick automated feedback on the quality of the source package without
waiting for pbuilder to create the chroot and for the test build to
happen.
You might notice that the script is not very secure -- Niels Thykier
already did! That's fine by me; it's only Travis-CI's machines that
could be worsened by that insecurity, and really, they already gave me a
root shell with no password. (This might sound dismissive of Travis-CI
-- I don't mean it to be! I just mean that their security model already
presumably involves throwing away the environment in which my code is
executing, and I enjoy taking advantage of that.)
Since the Travis virtual machine is Ubuntu, and we want to run the
latest version of lintian (a Debian packaging "lint" checker), we run
lintian within the Debian sid pbuilder. To do that, I use the glorious
"B90lintian" sample pbuilder hook script, which comes bundled with
pbuilder in /usr/share/doc/pbuilder/.
The full build, which includes creating a sid pbuilder from scratch,
takes merely 7-10 minutes. I personally find this kind of shockingly
speedy -- in 2005, when I first got involved, doing a pbuilder build
seemed like it would take forever. Now, a random free shell service on
the Internet will create a pbuilder, and do a test build within it, in
about 5 minutes.
Package review, without automation
I've done package review for other mentees in the past. I tend to do it
in a very bursty fashion -- one weekend day or one weeknight I decide
sure, it's a good day to read Debian packages and provide feedback.
Usually we do it asynchronously on the following protocol:
I dig up an email from someone who needed review.
I read through the packaging files, doing a variety of checks as they occur to me.
If I find problems, I write an email about them to the mentee. If not, success! I sign and upload the package.
There are some problems with the above:
The burstiness means that if someone fixes the issues, I might not have time to re-review for another month or longer.
The absence of an exhaustive list of things to look for means that I could fail to provide that feedback in the first round of review, leading to a longer wait time.
The person receiving the email might not understand my comments, which could interact really badly with the burstiness.
I did this for Simon Fondrie-Teitler's python-pypump package recently.
We followed the above protocol. I wrote a long email to Simon, where I
remarked on various good and bad points of the packaging. It was part
commentary, part terminal transcript -- I use the terminal transcripts
to explain what I mean. This is part of the email I sent:
I got an error in the man page generation phase -- because at
build-time, I don't have requests-oauthlib:
make[2]: Leaving directory `/tmp/python-pypump-0.5-1+dfsg/docs'
help2man --no-info \
-n 'sets up an environment and oauth tokens and allows for interactive testing' \
--version-string=0.5.1 /tmp/python-pypump-0.5-1+dfsg/pypump-shell > /tmp/python-pypump-0.5-1+dfsg/debian/pypump-shell.1
help2man: can't get `--help' info from /tmp/python-pypump-0.5-1+dfsg/pypump-shell
Try `--no-discard-stderr' if option outputs to stderr
make[1]: *** [override_dh_auto_build] Error 1
This seems to be because:
python-pypump-0.5-1+dfsg $ ./pypump-shell
Traceback (most recent call last):
File "./pypump-shell", line 26, in <module>
from pypump import PyPump, Client
File "/tmp/python-pypump-0.5-1+dfsg/pypump/__init__.py", line 19, in <module>
from pypump.pypump import PyPump, WebPump
File "/tmp/python-pypump-0.5-1+dfsg/pypump/pypump.py", line 28, in <module>
from six.moves.urllib import parse
ImportError: No module named urllib
$ ./pypump-shell
Traceback (most recent call last):
File "./pypump-shell", line 26, in <module>
from pypump import PyPump, Client
File "/tmp/python-pypump-0.5-1+dfsg/pypump/__init__.py", line 19, in <module>
from pypump.pypump import PyPump, WebPump
File "/tmp/python-pypump-0.5-1+dfsg/pypump/pypump.py", line 29, in <module>
from requests_oauthlib import OAuth1
ImportError: No module named requests_oauthlib
The deeper problem was a missing build-dependency, and I explained that
in my email. But the meta problem is that Simon didn't try building
this in a pbuilder, or otherwise clean build environment.
Simon fixed these problems, and submitted a fresh package to
Paul Tagliamonte and myself. It happened to
have some typos in the names of the new build dependencies. Paul
reviewed the fixed package, noticed the typos, fixed them, and uploaded
it. Simon had forgotten to do a test build the second time, too, which
is an understandable human failure. There was a two-day delay between
Simon's fixed resubmission, and Paul signing+uploading the fixed result.
In a pedagogical sense, there's something disappointing about that
exchange for me: Paul fixed an error Simon introduced, so we're not
teaching Simon to take total responsibility for his packages in Debian,
nor to understand the Debian system as well as he could. (Luckily, I
think Simon already understands the importance of taking responsibility!
It's just a hypothetical in this case.)
For the future
The next time I review a package, I'm going to try to do something
similar to my Travis-CI hack. It would be nice to have the do.sh script
be a little more abstract; I imagine that as I try to use it for a
different package, I'll discover the right abstractions.
I'd love it if Travis-CI did not require the git repositories to be on
GitHub. I'd also like if the .travis.yml file could be in a different
path. If so, we could create debian/travis-configuration (or something)
and keep the packaging files nice and separate from the upstream source.
I'd also love to hear about other people's feedback. Are you doing
something similar? Do you want to be? Would you have done some of this
differently? Leave a comment here, or ping me (paulproteus) on #debian-mentors
on irc.debian.org (aka irc.oftc.net).
I'll leave you with some conversation from #debian-mentors:
<paulproteus> The automation here, I think, is really interesting.
<paulproteus> What I really want is for mentees to show up to me and say "Here is my package + build log with pbuilder, can you sponsor it please?"
<Unit193> Oooooh!
-*- Unit193 gets ideas.
<paulproteus> Although the irony is that I actually like the community-building and relationship-building nature of having these things be conversations.
<bremner> how will this brave new world cope with licensing issues?
<paulproteus> bremner: It's not a replacement for actual review, just a tool-assist.
<paulproteus> bremner: You might be relieved to know that much of Unit193's and my back and forth related to get-orig-source and licensing. (-:
<bremner> I didn't doubt you ;).
<paulproteus> If necessary I can just be a highly productive reviewer, but I would prefer to figure out some way that I can get other non-paulproteus people to get a similar assist.
<paulproteus> I think the current blocker is "omg travis why are you bound to githubbbbbbbb" which is a reasonable concern.
This week, I started work on something I'm calling moxie. Due to wanting to use my aiodocker bindings on the backend, I decided to implement it in 100% AsyncIO Python 3.4.
What pushed me over the edge was finding the aiopg driver (postgres asyncio bindings), with very (let me stress - very) immature SQLAlchemy support.
Unfortunately, no web frameworks support asyncio as a first-class member of the framework, so I was forced into writing a microframework. The resulting app looks pretty not bad, and should be easy to switch over if Flask ever gets asyncio support.
One neat side-effect was that the framework can support stuff like websockets as a first-class element of the framework, just like GET requests.
Moxie will be a tool to run periodic long-running jobs in a sane way using docker.io.
More soon!
It is time for a new Tanglu update, which has been overdue for a long time now!
Many things happened in Tanglu development, so here is just a short overview of what was done in the past months.
Infrastructure
Debile
The whole Tanglu distribution is now built with Debile, replacing Jenkins, which was difficult to use for package building purposes (although Jenkins is great for other things). You can see the Tanglu builders in action at buildd.tg.o.
The migration to Debile took a lot of time (a lot more than expected), and blocked the Bartholomea development at the beginning, but now it is working smoothly. Many thanks to all people who have been involved with making Debile work for Tanglu, especially Jon Severinsson. And of course many thanks to the Debile developers for helping with the integration, Sylvestre Ledru and of course Paul Tagliamonte.
Archive Server Migration
Those who read the tanglu-announce mailinglist know this already: We moved the main archive server stuff at archive.tg.o to a new location, and to a very powerful machine. We also added some additional security measures to it, to prevent attacks.
The previous machine is now being used for the bugtracker at bugs.tg.o and for some other things, including an archive mirror and the new Tanglu User Forums. See more about that below.
Transitions
There is huge ongoing work on package transitions. Take a look at our transition tracker and the staging migration log to get a taste of it.
Merging with Debian Unstable is also going on right now, and we are working on merging some of the Tanglu changes which are useful for Debian as well (or which just reduce the diff to Tanglu) back to their upstream packages.
Installer
Work on the Tanglu Live-Installer, although badly needed, has not yet been started (it's a task ready for the taking by anyone who'd like to do it!). However, some awesome progress has been made in making the Debian-Installer work for Tanglu, which allows us to perform minimal installations of the Tanglu base system and makes it easier to support alternative Tanglu flavours. The work on d-i also uncovered a bug which appeared with the latest version of findutils, and which was reported upstream before Debian could run into it. This awesome progress was possible thanks to the work of Philip Muškovac and Thomas Funk (in really hard debug sessions).
Tanglu Forums
We finally have the long-awaited Tanglu user forums ready! As discussed in the last meeting, a popular demand on IRC and our mailing lists was a forum or Stackexchange-like service for users to communicate, since many people work better with that than with mailing lists.
Therefore, the new English TangluUsers forum is now ready at TangluUsers.org. The forum software is in an alpha version though, so we might experience some bugs which haven't been uncovered in the testing period. We will watch how the software performs and then decide if we stick with it or switch to another one. But so far, we are really happy with the Misago forums, and our usage of it has already led to the inclusion of some patches against Misago. It is also actively maintained and has an active community.
Misc Things
KDE
We will ship with at least KDE Applications 4.13, maybe some 4.14 things as well (if we are lucky, since Tanglu will likely be in feature-freeze when this stuff is released). The other KDE parts will remain on their latest version from the 4.x series. For Tanglu 3, we might update KDE SC 4.x to KDE Frameworks 5 and use Plasma 5 though.
GNOME
Due to the lack of manpower on the GNOME flavor, GNOME will ship in the same version available in Debian Sid, maybe with some stuff pulled from Experimental where it makes sense. A GNOME flavor is planned to be available.
Common infrastructure
We currently run with systemd 208, but a switch to 210 is planned. Tanglu 2 also targets the X.org server in version 1.16. For more changes, stay tuned. The kernel release for Bartholomea is also not yet decided.
Artwork
Work on the default Tanglu 2 design has started as well; any artwork submissions are most welcome!
Tanglu joins the OIN
The Tanglu project is now a proud member (licensee) of the Open Invention Network (OIN), which builds a pool of defensive patents to protect the Linux ecosystem from companies trying to use patents against Linux. Although the Tanglu community does not fully share the generally positive stance the OIN has on software patents, the OIN effort is very useful and we agree with its goal. Therefore, Tanglu joined the OIN as a licensee.
And that's the stuff for now! If you have further questions, just join us in #tanglu or #tanglu-devel on Freenode, or write to our newly created forum! You can, as always, also subscribe to our mailing lists to get in touch.
I've been learning about Android development over the last few weeks, and I think I'm slowly getting the hang of best practices.
It's a bit tough; there's a lot of stuff that's not super Java-idiomatic that's become Android-idiomatic, so getting over that has been interesting.
I've been finding that Android tends to re-implement most things in a similar enough way, but always with some small tweak that feels kinda funny.
It's working well enough, and I'm hoping that I can clean up a few Android libraries to deal with some of the OpenGov datasets I'm interested in.
If anyone has any tips on proper handling of what to make a fragment, and what activities should look like, I'd really love posts about that.
It seems like sometimes I have a 1-to-1 mapping of Fragments to Activities, but I want to keep them fragments for large devices.
Anyway, best practices welcome.
Great time, super well organized by this year's TCamp staff. Really outstanding. Lots of really amazing discussion, and I feel a lot of effort is finally jelling around Open Civic Data, which is an absolute thrill for me.
Can't wait to see what the next few months bring!
I've been using Linode since 2010, and many of
my friends have heard me talk about how big a fan I am of linode. I've
used Debian unstable on all my Linodes, since I often use them as a remote
shell for general purpose Debian development. I've found my linodes to be
indispensable, and I really love Linode.
The Problem
Recently, because of my work on Docker, I was forced
to stop using the Linode kernel in favor of the stock Debian kernel, since
the stock Linode kernel has no aufs support, and the default LVM-based
devicemapper backend can be quite a pain.
I tried loading in btrfs support, and
using that to host the Docker instance backed with btrfs, but it was throwing
errors as well. Stuck with unstable backends, I wanted to use the
aufs backend, which, despite problems in
aufs internally, is quite stable with Docker (and in general).
I started to run through the Linode Library's guide on PV-Grub,
but that resulted in a cryptic error with xen not understanding the compression
of the kernel. I checked for recent changes to the compression, and lo, the
Debian kernel has been switched to use xz compression in sid. Awesome news,
really. XZ compression is awesome, and I've been super impressed with how
universally we've adopted it in Debian. Keep it up! However, it appears only
a newer pv-grub than the Linode hosts have installed will fix this.
After contacting the (ever friendly) Linode support, they were unable to give
me a timeline on adding xz support, which would entail upgrading pv-grub. It
was quite disappointing news, to be honest. Workarounds were suggested,
but I'm not quite happy with them as proper solutions.
After asking in #debian-kernel, waldi was
able to give me a few pointers, and the following is very inspired by him,
the only thing that changed much was config tweaking, which was easy enough.
Thanks, Bastian!
The Constraints
I wanted to maintain a 100% stock configuration from the kernel up.
When I upgraded my kernel, I wanted it to just work. I didn't want to
unpack and repack the kernel, and I didn't want to install software
outside main on my system. It had to be 100% Debian and unmodified.
The Solution
Left unable to run my own kernel directly in the Linode interface, the tack
here was to use Linode's old pv-grub to chain-load grub-xen, which then loads
a modern kernel. Turns out this works great.
Let's start by creating a config for Linode's pv-grub to read
and use.
sudo mkdir -p /boot/grub/
Now, since pv-grub is legacy GRUB, we can write out the following
config to /boot/grub/menu.lst to chain-load grub-xen (which is just GRUB 2, as
far as I can tell). And to think, I almost forgot all about
menu.lst. Almost.
Next, change your boot configuration to use pv-grub, and give the machine
a kick. Should work great! If you run into issues, use the lish shell to
debug it, and let me know what else I should include in this post!
Hack on!
Hello, World!
I've deployed my first instance of lenin to my backup VCS (lucifer.pault.ag), and it's going great.
(Screenshot of the first few instances for good measure)
I'm excited to see how it develops!
There is a strange bug in Planet Debian I am seeing since
I joined. It is rather minor, but since it is an accessibility
bug, I'd like to mention it here. I have written to
the Planet Debian maintainers, and was told to figure it out myself.
This is a pattern, accessibility is considered wishlist, apparently.
And the affected people are supposed to fix it on their own.
It is better if I don't say anything more about that attitude.
The Bug
On Planet Debian, only some people have an alt tag for their hackergotchi,
while all the configured entries look similar.
There is no obvious difference in the configuration, but still,
only some users here have a proper alt tag for their hackergotchi. Here is a list:
These people/organisations currently displayed on Planet Debian
have a proper alt tag for their hackergotchi. All the other members have none.
In Lynx, it looks like the following:
hackergotchi for
And for those where it works, it looks like:
hackergotchi for Dirk Eddelbuettel
Strange, isn't it? If you have any idea why this might be happening,
let me know, or even better, tell Planet Debian maintainers how to fix it.
P.S.: Package planet-venus says it is a rewrite of Planet, and Planet can
be found in Debian as well. I don't see it in unstable, maybe I am blind?
Or has it been removed recently? If so, the package description
of planet-venus is wrong.
I gave a talk this year at PyCon 2014, about one
of my favorite subjects: Hy. Many of my regular readers
will have no doubt explored Hy's thriving
GitHub org, played with
try-hy, or even installed it locally by
pip installing it. I was lucky enough to
be able to attend PyCon on behalf of Sunlight,
with a solid contingent of my colleagues. We put together a writeup on the
Sunlight blog
if anyone was interested in our favorite talks.
Tons of really amazing questions, and such an amazingly warm reception from
so many of my peers throughout this year's PyCon. Thank you so much to
everyone that attended the talk. As always, you should
Fork Hy on GitHub,
follow @hylang on the twitters, and
send in any bugs you find!
Hopefully I'll be able to put my talk up in blog-post form soon, but until then
feel free to look over the slides or just
watch the talk.
An extra shout-out to @akaptur for hacking on
Hy during the sprints, and giving the exception system
quite the workthrough.
Thanks, Allison!
PyCon 2014 happened. (Sprints are still happening.)
This was my 3rd PyCon, but my first year as a serious contributor to the
event, which led to an incredibly different feel. I also came as a
person running a company building a complex system in Python, and I
loved having the overarching mission of what I'm building driving my
approach to what I chose to do. PyCon is one of the few conferences
I go to where the feeling of acceptance and at-homeness mitigates the
introvert overwhelm at nonstop social interaction. It's truly a special
event and community.
Here are some highlights:
I gave a tutorial about
search,
which was recorded in its entirety... if you watch this video, I
highly recommend skipping the hands-on parts where I'm just walking
around helping people out.
I gave a talk! It's called Subprocess to FFI, and you can find the
video
here.
Through three full iterations of dry runs with feedback, I had a ton of fun
preparing this talk. I'd like to give more like it in the future as I
continue to level up my speaking skills.
Allen Downey came to my talk and found me
later to say hi. Omg amazing, made my day.
Aux Vivres and Dieu du
Ciel, amazing eats and drink with great
new and old friends. Special shout out to old Debian friends Micah
Anderson, Matt Zimmerman, and Antoine Beaupré for a good time at Dieu
du Ciel.
The Geek Feminism open space was a great place to chill out and always
find other women to hang with, much thanks to Liz Henry for organizing
it.
Talking to the community from the Inbox
booth on Startup Row in the Expo hall on Friday. Special thanks to
Don Sheu and Yannick Gingras for making this happen; it was awesome!
The PyLadies lunch. Wow, was that amazing. Not only did I get to meet
Julia Evans (who also liked meeting
me!), but there was an amazing
lineup of women telling everyone about what they're doing.
This and Naomi Ceder's
touching talk
about openly transitioning while being a member of the Python
community really show how the community walks the walk when it comes
to diversity and is always improving.
Catching up with old friends like Biella Coleman, Selena Deckelmann,
Deb Nicholson, Paul Tagliamonte, Jessica McKellar, Adam Fletcher, and
even friends from the Bay Area whom I don't see often. It was hard to
walk places without getting too distracted running into people I knew;
I got really good at waving and continuing on my way.
I didn't get to go to a lot of talks in person this year since my
personal schedule was so full, but the PyCon video team is amazing as
usual, so I'm looking forward to checking out the
archive. It really is a
gift to get the videos up while energy from the conference is still so
high and people want to check out things they missed and share the talks
they loved.
Thanks to everyone, hugs, peace out, et cetera!
As some of you know, I've started to take Sundays off, and this is mostly a post to continue to hold myself accountable, and to assure everyone that yes! That's still going on!
I've found my Saturdays to be more productive, and my Sundays to be much more enjoyable and stress-free.
For those that don't know, I'm not using any computer at all on my Sundays (hilariously dubbed paul-tag by my friends; only Germans would find this funny, though :)), but my tablet and phone seem to be OK (though I actively avoid email).
If anyone's on the fence, I highly recommend doing this; it's really great.
It sucks.
It annoys me to no end that I can't run my own Linux kernel on an Android device. It's very annoying that the Android patches haven't been upstreamed, and it's even more annoying that I can't run a host OS that I build lovingly and work on during the nights.
I hate that I can't use my tablet as a porter box with a USB OTG hard disk.
Alas. One day. One day I'll have a proper armhf device that I can run Debian on.
But that day is not today.
If you're the sort that cares about federation, the MediaGoblin project is for you!
MediaGoblin is a media hosting platform where you can post all sorts of things: video, images, or even 3D models. It's a nice replacement for things like Flickr and YouTube, and it's super easy to set up.
If I weren't such a lazy person, I'd have uploaded it to Debian, but alas. Soon. Soon!
It's an official GNU project, its maintainer, Chris, is a totally awesome guy, and MediaGoblin is really important work.